Kernelized correlation filtering method based on fast discriminative scale estimation
XIONG Xiaoxuan, WANG Wenwei
Journal of Computer Applications    2019, 39 (2): 546-550.   DOI: 10.11772/j.issn.1001-9081.2018061360
Focusing on the issue that the Kernelized Correlation Filter (KCF) cannot respond to target scale changes, a KCF target tracking algorithm based on fast discriminative scale estimation was proposed. Firstly, the target position was estimated by KCF. Then, a fast discriminative scale filter was learned online from a set of target samples at different scales. Finally, an accurate estimate of the target size was obtained by applying the learned scale filter at the target position. Experiments were conducted on the Visual Tracker Benchmark video sequences, comparing the proposed algorithm with the KCF algorithm based on Discriminative Scale Space Tracking (DSST) and the traditional KCF algorithm. The results show that when the target scale changes, the tracking accuracy of the proposed algorithm is 2.2% to 10.8% higher than that of the two comparison algorithms, and its average frame rate is 19.1% to 68.5% higher than that of the DSST-based KCF algorithm. The proposed algorithm adapts well to target scale changes while maintaining high real-time performance.
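As a concrete illustration of the scale-filter step described above, here is a minimal NumPy sketch of a DSST-style one-dimensional discriminative scale filter. It is a sketch under assumptions, not the paper's code: the feature matrix, the Gaussian label, and all names (`gaussian_label`, `train_scale_filter`, `detect_scale`) are illustrative.

```python
import numpy as np

def gaussian_label(n, sigma=1.2):
    # Desired correlation output: a Gaussian peaked at the middle bin,
    # i.e. at the "scale unchanged" hypothesis.
    k = np.arange(n) - n // 2
    return np.exp(-0.5 * (k / sigma) ** 2)

def train_scale_filter(samples, y, lam=1e-2):
    # samples: (d, n) matrix with one column of flattened patch features
    # per candidate scale; the filter is learned along the scale axis.
    F = np.fft.fft(samples, axis=1)
    Y = np.fft.fft(y)
    A = np.conj(Y) * F                        # per-feature-row numerator
    B = np.sum(F * np.conj(F), axis=0) + lam  # shared denominator
    return A, B

def detect_scale(samples, A, B):
    # Correlate the learned filter with new scale samples; the argmax of
    # the real response selects the best scale bin.
    Z = np.fft.fft(samples, axis=1)
    resp = np.real(np.fft.ifft(np.sum(np.conj(A) * Z, axis=0) / B))
    return int(np.argmax(resp))

# Toy usage: 64 features at 17 candidate scales; bin 8 means "unchanged".
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 17))
A, B = train_scale_filter(X, gaussian_label(17))
print(detect_scale(X, A, B))  # -> 8 on the training sample itself
```

In a full tracker of this kind, the estimation runs after the KCF translation step, and the numerator and denominator are updated online frame by frame.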
High-speed method for extracting center of line structured light
SU Xiaoqin, XIONG Xianming
Journal of Computer Applications    2016, 36 (1): 238-242.   DOI: 10.11772/j.issn.1001-9081.2016.01.0238
In three-dimensional measurement with line structured light, the speed and accuracy of extracting the center of the light stripe directly affect the overall performance of the system. Building on the geometric center method, the direction template method and the barycenter method, a high-speed algorithm for extracting the center of line structured light was proposed. First, the image was preprocessed and a contour line was rapidly extracted with the geometric center method. Then, a location-based normal detection method was designed to extract the normal direction of the contour line; it computes significantly faster than the direction template method. Finally, the center of the light stripe was extracted precisely from the gray-weighted pixels along the normal direction of the contour line. The experimental results indicate that the proposed algorithm is significantly faster while reaching sub-pixel accuracy.
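The final gray-weighting step lends itself to a short sketch: for each coarse contour point, sample pixels along the local normal and take the intensity-weighted centroid as the sub-pixel center. This is a minimal NumPy illustration under assumed inputs (unit normals, a grayscale image); the function names are hypothetical.

```python
import numpy as np

def refine_center(image, contour_pts, normals, half_width=5):
    # For each coarse center (y, x) with unit normal (ny, nx), sample
    # pixels along the normal and return the gray-weighted sub-pixel center.
    refined = []
    for (y, x), (ny, nx) in zip(contour_pts, normals):
        offsets = np.arange(-half_width, half_width + 1)
        ys = np.clip(np.round(y + offsets * ny).astype(int), 0, image.shape[0] - 1)
        xs = np.clip(np.round(x + offsets * nx).astype(int), 0, image.shape[1] - 1)
        w = image[ys, xs].astype(float)
        if w.sum() == 0:                   # no light stripe here; keep coarse point
            refined.append((float(y), float(x)))
            continue
        t = (offsets * w).sum() / w.sum()  # sub-pixel offset along the normal
        refined.append((y + t * ny, x + t * nx))
    return np.array(refined)

# Toy stripe: rows 9..11 are bright, brightest at row 10.
img = np.zeros((20, 20))
img[9, :], img[10, :], img[11, :] = 64, 255, 64
pts = [(10, x) for x in range(20)]
normals = [(1.0, 0.0)] * 20             # normal along the y axis
print(refine_center(img, pts, normals)[0])  # -> [10.  0.]
```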
Classification method for interval uncertain data based on improved naive Bayes
LI Wenjin, XIONG Xiaofeng, MAO Yimin
Journal of Computer Applications    2014, 34 (11): 3268-3272.   DOI: 10.11772/j.issn.1001-9081.2014.11.3268
Considering the high computational complexity and storage requirement of Naive Bayes (NB) based on Parzen Window Estimation (PWE), especially for the classification of interval uncertain data, an improved method named IU-PNBC was proposed. Firstly, the Class-Conditional Probability Density Function (CCPDF) was estimated using PWE. Secondly, an approximate function for the CCPDF was obtained by algebraic interpolation. Finally, the posterior probability was computed from the approximate interpolation function and used for classification. Artificial simulation data and UCI standard datasets were used to verify the rationality of the proposed algorithm and the effect of the number of interpolation points on the classification accuracy of IU-PNBC. The experimental results show that the accuracy of IU-PNBC increases with the number of interpolation points and becomes stable once more than 15 points are used, and that IU-PNBC avoids the dependence on the training samples at prediction time and effectively improves computational efficiency. Thus, IU-PNBC is suitable for classifying large interval uncertain datasets, with lower computational complexity and storage requirement than PWE-based NB.
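The computational saving comes from evaluating the kernel density once at a few nodes and interpolating afterwards. Below is a small Python sketch of that idea; note the paper uses algebraic interpolation, while plain linear interpolation (`np.interp`) stands in for it here, and all names are hypothetical.

```python
import numpy as np

def parzen_density(train, h):
    # Gaussian-kernel Parzen window estimate of a 1-D class-conditional
    # density; each evaluation costs O(len(train)).
    def pdf(x):
        z = (np.asarray(x) - train[:, None]) / h
        return np.mean(np.exp(-0.5 * z ** 2), axis=0) / (h * np.sqrt(2 * np.pi))
    return pdf

def tabulate(pdf, lo, hi, n_points=16):
    # Evaluate the density once at n_points interpolation nodes; later
    # queries use the cheap interpolant instead of the full kernel sum.
    xs = np.linspace(lo, hi, n_points)
    return xs, pdf(xs)

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 500)
xs, ys = tabulate(parzen_density(train, h=0.3), -4.0, 4.0, n_points=16)
query = np.array([0.1, 1.5])
approx = np.interp(query, xs, ys)   # density estimate without touching `train`
print(approx)
```

After tabulation, classification no longer needs the training samples at all, which is the source of the reduced storage requirement reported above.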
Least cache value replacement algorithm
LIU Lei, XIONG Xiaopeng
Journal of Computer Applications    2013, 33 (04): 1018-1022.   DOI: 10.3724/SP.J.1087.2013.01018
In order to improve cache performance for search applications, a new replacement algorithm, Least Cache Value (LCV), was proposed. The algorithm takes into account both the access frequency and the size of each object: the set of cached objects that contributes least to the Byte Hit Ratio (BHR) is replaced first. The selection of the optimal replacement set was transformed into the classical 0-1 knapsack problem, and a fast approximate solution together with the supporting data structure was given. The experiments show that the LCV algorithm outperforms LRU (Least Recently Used), FIFO (First-In First-Out) and GD-Size (Greedy Dual-Size) in increasing BHR and reducing Average Latency Time (ALT).
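To make the greedy 0-1 knapsack view concrete, here is a toy Python sketch of an LCV-style cache. It simplifies the paper's formulation: each object's value is approximated as access frequency times size, so value per byte reduces to frequency, and the lowest-frequency object is evicted first. Class and method names are hypothetical.

```python
class LCVCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.objects = {}  # key -> (size_in_bytes, access_frequency)

    def get(self, key):
        if key in self.objects:
            size, freq = self.objects[key]
            self.objects[key] = (size, freq + 1)
            return True
        return False

    def put(self, key, size):
        while self.used + size > self.capacity and self.objects:
            self._evict()
        if self.used + size <= self.capacity:
            self.objects[key] = (size, 1)
            self.used += size

    def _evict(self):
        # Value ~ freq * size bytes of future hits; weight = size bytes.
        # The greedy knapsack rule keeps objects with the highest
        # value/weight = freq, so the lowest-frequency object goes first.
        victim = min(self.objects, key=lambda k: self.objects[k][1])
        self.used -= self.objects[victim][0]
        del self.objects[victim]

cache = LCVCache(capacity=100)
cache.put("a", 60); cache.put("b", 30)
cache.get("a")                 # bump a's frequency
cache.put("c", 40)             # evicts b, the lowest-frequency object
print(sorted(cache.objects))   # -> ['a', 'c']
```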
Blog community detection based on formal concept analysis
LIU Zhaoqing, FU Yuchen, LING Xinghong, XIONG Xiangyun
Journal of Computer Applications    2013, 33 (01): 189-191.   DOI: 10.3724/SP.J.1087.2013.00189
The trawling algorithm suffers from several problems: it yields too many Web communities, the extracted community cores overlap heavily, and its strict definition of community produces isolated communities. To address these issues, a Blog community detection algorithm based on Formal Concept Analysis (FCA) was proposed. Firstly, a concept lattice was built from the link relations between Blogs; then the lattice was divided into clusters based on an equivalence relation; finally, communities were formed within each cluster based on the similarity of concepts. The experimental results show that community cores whose network density exceeds 40% account for 83.42% of all cores in the test dataset, the network diameter of the combined community is 3, and the community content is significantly enriched. The proposed algorithm can be effectively used to detect communities in Blogs, micro-blogs and other social networks, and it has significant application value and practical meaning.
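For readers unfamiliar with FCA, the lattice-building step can be illustrated with a brute-force enumeration of formal concepts from a binary blog-link context. This toy sketch is exponential in the number of attributes and only shows what a formal concept (extent, intent) looks like; the paper's actual lattice construction and clustering are not reproduced here.

```python
from itertools import combinations

def formal_concepts(context):
    # context: dict mapping each blog to the set of blogs it links to.
    objects = set(context)
    attributes = set().union(*context.values())

    def common_attrs(objs):
        return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

    def common_objs(attrs):
        return {o for o in objects if attrs <= context[o]}

    concepts = set()
    # The closure of every attribute subset yields every formal concept.
    for r in range(len(attributes) + 1):
        for attrs in combinations(sorted(attributes), r):
            extent = common_objs(set(attrs))
            intent = common_attrs(extent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

# Toy link context: blogs a and c share links to both x and y.
ctx = {"a": {"x", "y"}, "b": {"x"}, "c": {"x", "y"}}
for extent, intent in sorted(formal_concepts(ctx), key=lambda c: -len(c[0])):
    print(sorted(extent), sorted(intent))
# ['a', 'b', 'c'] ['x']
# ['a', 'c'] ['x', 'y']
```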
BTopicMiner: domain-specific topic mining system for Chinese microblog
LI Jin, ZHANG Hua, WU Hao-xiong, XIANG Jun
Journal of Computer Applications    2012, 32 (08): 2346-2349.  
As microblogging grows rapidly, automatically extracting popular topics of interest to users from massive microblog data has become a challenging research area. This paper proposed a topic extraction algorithm for Chinese microblogs based on an extended topic model. To deal with the data sparsity of microblogs, content-related microblog texts were first clustered to generate synthetic documents. Based on the assumption that the posting relationship among microblogs implies topical correlation, the traditional LDA (Latent Dirichlet Allocation) topic model was extended to model this posting relationship. Finally, after topics were extracted with the extended LDA model, Mutual Information (MI) was used to compute the topic vocabulary for topic recommendation. A prototype domain-specific topic mining system named BTopicMiner was implemented to verify the effectiveness of the proposed algorithm. The experimental results show that the proposed algorithm extracts topics from microblogs more accurately, and that the semantic similarity between the MI-based automatically computed topic vocabulary and a manually selected topic vocabulary exceeds 75%.
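The MI-based vocabulary selection admits a compact sketch: given hard topic assignments for documents, rank each word by its mutual information with a topic. This is a simplified illustration under assumptions (document-level MI over hard assignments, hypothetical function names), not the system's implementation.

```python
import math

def top_words_by_mi(doc_topics, doc_words, topic, k=10):
    # doc_topics: one topic id per document; doc_words: parallel word lists.
    n_docs = len(doc_topics)
    topic_docs = sum(1 for t in doc_topics if t == topic)
    word_df, word_topic_df = {}, {}
    for t, words in zip(doc_topics, doc_words):
        for w in set(words):
            word_df[w] = word_df.get(w, 0) + 1
            if t == topic:
                word_topic_df[w] = word_topic_df.get(w, 0) + 1

    def mi(w):
        # MI contribution of the joint event (word present, topic = t).
        p_wt = word_topic_df.get(w, 0) / n_docs
        p_w, p_t = word_df[w] / n_docs, topic_docs / n_docs
        return p_wt * math.log(p_wt / (p_w * p_t)) if p_wt > 0 else 0.0

    return sorted(word_df, key=mi, reverse=True)[:k]

docs = [(0, ["nba", "game", "score"]), (0, ["nba", "playoffs"]),
        (1, ["stock", "market"])]
topics, words = zip(*docs)
print(top_words_by_mi(list(topics), list(words), topic=0, k=2))  # 'nba' ranks first
```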
Text classification model framework based on social annotation quality
LI Jin, ZHANG Hua, WU Hao-xiong, XIANG Jun, GU Xi-wu
Journal of Computer Applications    2012, 32 (05): 1335-1339.  
Social annotation is a form of folksonomy that allows Web users to freely categorize Web resources with text tags, and it usually carries fundamental and valuable semantic information about those resources. Consequently, social annotation can improve the quality of information retrieval when applied to an information retrieval system. This paper proposed an improved text classification algorithm based on social annotation. Because social tags are generated arbitrarily, without control or expert knowledge, their quality varies significantly. The paper therefore first proposed a quantitative approach that measures the quality of a social tag by the semantic similarity between the tag and the Web pages it annotates. Tags of relatively low quality were then filtered out, and the remaining high-quality tags were used to extend the traditional vector space model, in which a Web page is represented by a vector whose components are the words of the page and the tags attached to it. Finally, a support vector machine was employed for the classification task. The experimental results show that classification improves after filtering out low-quality tags and embedding the high-quality ones into the vector space model; compared with other classification approaches, the F1 measure increased by 6.2% on average.
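The two key steps, scoring tag quality by page-tag similarity and extending the vector space model with the surviving tags, can be sketched briefly. The similarity proxy below (cosine between a page's term vector and a tag's aggregated context vector) and all names are assumptions for illustration, not the paper's exact measure.

```python
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[w] * b.get(w, 0) for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def tag_quality(page_terms, tag, tag_contexts):
    # Higher similarity between the page and the tag's typical context
    # is taken as a proxy for higher tag quality.
    return cosine(Counter(page_terms), tag_contexts.get(tag, Counter()))

def extend_vector(page_terms, tags, quality, threshold=0.2):
    # Extended VSM: page words plus the tags whose quality passes the filter.
    vec = Counter(page_terms)
    for t in tags:
        if quality.get(t, 0.0) >= threshold:
            vec[t] += 1
    return vec

page = ["python", "tutorial", "code", "python"]
contexts = {"programming": Counter({"python": 5, "code": 3}),
            "funny": Counter({"meme": 4})}
q = {t: tag_quality(page, t, contexts) for t in contexts}
print(extend_vector(page, list(contexts), q))  # keeps 'programming', drops 'funny'
```

The extended vectors would then be fed to the SVM classifier exactly as ordinary term vectors.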
Formal description and analysis of conformance of composite Web service behavior
LI Jin, ZHANG Hua, WU Hao-xiong, XIANG Jun
Journal of Computer Applications    2012, 32 (02): 545-550.   DOI: 10.3724/SP.J.1087.2012.00545
Web service choreography and orchestration define the global interaction of a composite Web service and the local behavior of each participant from the global and local perspectives, respectively. The conformance of each participant's local behavior to the global interaction guarantees the correctness of the Web service composition. Based on process algebra, the paper first presented a set of definitions that formally describe the global interaction of a composite Web service, the local behavior of each participant, and the mapping rules between them. Two formal rules for judging the conformance of each participant's local behavior to the global interaction were then proposed, built on the relationship between the transitions of the global interaction and the local processes together with bisimulation theory. Finally, the formal conformance checking approach was illustrated through a case study. The case study shows that the proposed conformance definition and checking approach can effectively verify the conformance of a Web service composition, thereby guaranteeing its correctness.
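Since the judgment rules rest on bisimulation, a tiny fixed-point bisimulation checker over finite labelled transition systems illustrates the flavor of the check. This is a generic strong-bisimulation sketch with hypothetical names, not the paper's process-algebra rules.

```python
def bisimilar(lts_a, lts_b, start_a, start_b):
    # Each LTS is a dict: state -> list of (label, successor).
    # Start from all state pairs and delete pairs violating the transfer
    # property until a fixed point is reached.
    pairs = {(p, q) for p in lts_a for q in lts_b}

    def matched(moves, other_moves, flip):
        # Every move needs an equally labelled move into a related pair.
        for label, nxt in moves:
            if not any(l == label and ((n, nxt) if flip else (nxt, n)) in pairs
                       for l, n in other_moves):
                return False
        return True

    changed = True
    while changed:
        changed = False
        for p, q in list(pairs):
            if not (matched(lts_a[p], lts_b[q], flip=False)
                    and matched(lts_b[q], lts_a[p], flip=True)):
                pairs.discard((p, q))
                changed = True
    return (start_a, start_b) in pairs

# Toy check: a two-step global interaction vs. a participant's local behavior.
g = {"g0": [("order", "g1")], "g1": [("ship", "g2")], "g2": []}
l = {"l0": [("order", "l1")], "l1": [("ship", "l2")], "l2": []}
print(bisimilar(g, l, "g0", "l0"))  # -> True
```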
Illumination-adaptive skin color detection method
XIONG Xia, SANG Qing-bing
Journal of Computer Applications    2011, 31 (05): 1233-1236.   DOI: 10.3724/SP.J.1087.2011.01233
Through a study of skin features in the YCgCr color space under different lighting environments, it was found that skin pixels occupy different Cg and Cr regions in different environments. A correlation matrix was used to estimate the lighting environment, and different skin segmentation methods were utilized according to the detection result, with a dynamic threshold composed of between-class variance and within-class scatter. Compared with traditional methods, the proposed method substantially reduces the impact of color distortion under varying illumination. The experimental results show that it achieves higher accuracy and a lower detection error rate.
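The dynamic-threshold idea can be illustrated with the classical between-class-variance (Otsu) criterion, noting that for a fixed total variance, maximizing between-class variance is equivalent to minimizing within-class scatter. This NumPy sketch thresholds a 1-D channel histogram (e.g. Cr values); it shows the criterion only, not the paper's full combination.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    # Pick the threshold that maximizes between-class variance over a
    # 1-D histogram of channel values.
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability
    mu = np.cumsum(p * np.arange(bins))   # cumulative mean (bin units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = int(np.nanargmax(sigma_b))
    return edges[k + 1]

# Toy bimodal data standing in for skin / non-skin Cr values.
rng = np.random.default_rng(0)
cr = np.concatenate([rng.normal(120, 8, 1000), rng.normal(160, 8, 1000)])
print(otsu_threshold(cr))  # falls between the two modes, near 140
```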
Related Articles | Metrics